Results 1 - 6 of 6
1.
Transl Vis Sci Technol ; 12(7): 10, 2023 07 03.
Article in English | MEDLINE | ID: mdl-37428131

ABSTRACT

Purpose: To examine deep learning (DL)-based methods for accurate segmentation of geographic atrophy (GA) lesions using fundus autofluorescence (FAF) and near-infrared (NIR) images. Methods: This retrospective analysis utilized imaging data from study eyes of patients enrolled in the Proxima A and B (NCT02479386; NCT02399072) natural history studies of GA. Two multimodal DL networks (UNet and YNet) were used to automatically segment GA lesions on FAF; segmentation accuracy was compared with annotations by experienced graders. The training data set comprised 940 image pairs (FAF and NIR) from 183 patients in Proxima B; the test data set comprised 497 image pairs from 154 patients in Proxima A. Dice coefficient scores, Bland-Altman plots, and the Pearson correlation coefficient (r) were used to assess performance. Results: On the test set, Dice scores for the DL network-to-grader comparison ranged from 0.89 to 0.92 at the screening visit; the Dice score between graders was 0.94. GA lesion area correlations (r) for YNet versus grader, UNet versus grader, and between graders were 0.981, 0.959, and 0.995, respectively. Longitudinal GA lesion area enlargement correlations (r) for screening to 12 months (n = 53) were lower (0.741, 0.622, and 0.890, respectively) compared with the cross-sectional results at screening. Longitudinal correlations (r) from screening to 6 months (n = 77) were even lower (0.294, 0.248, and 0.686, respectively). Conclusions: Multimodal DL networks for segmenting GA lesions can produce accurate results comparable with those of expert graders. Translational Relevance: DL-based tools may support efficient and individualized assessment of patients with GA in clinical research and practice.
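The evaluation above rests on two standard measures: the Dice coefficient for mask overlap and the Pearson correlation of per-eye lesion areas. The sketch below shows how both can be computed; the mask shapes, the random placeholder data, and the function names are illustrative assumptions, not artifacts of the Proxima analysis.

```python
# Minimal sketch of the metrics described above (Dice overlap and Pearson
# correlation of lesion areas). Data are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr

def dice_score(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Example: compare model masks against grader masks for a batch of FAF images.
rng = np.random.default_rng(0)
model_masks = rng.random((10, 256, 256)) > 0.5   # placeholder predictions
grader_masks = rng.random((10, 256, 256)) > 0.5  # placeholder annotations

dice_per_eye = [dice_score(m, g) for m, g in zip(model_masks, grader_masks)]

# Lesion-area agreement: correlate per-eye areas (here in pixels).
model_areas = model_masks.reshape(len(model_masks), -1).sum(axis=1)
grader_areas = grader_masks.reshape(len(grader_masks), -1).sum(axis=1)
r, _ = pearsonr(model_areas, grader_areas)
print(f"mean Dice = {np.mean(dice_per_eye):.3f}, area r = {r:.3f}")
```

In practice, the pixel counts would be converted to mm² using the image's pixel spacing before computing area correlations.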


Subjects
Deep Learning, Geographic Atrophy, Humans, Cross-Sectional Studies, Fundus Oculi, Geographic Atrophy/diagnostic imaging, Retrospective Studies, Clinical Studies as Topic
2.
Ophthalmol Sci ; 3(4): 100319, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37304043

ABSTRACT

Purpose: Neovascular age-related macular degeneration (nAMD) shows variable treatment response to intravitreal anti-VEGF. This analysis compared the ability of different artificial intelligence (AI)-based machine learning models, using OCT and clinical variables at baseline, to predict best-corrected visual acuity (BCVA) at 9 months in response to ranibizumab in patients with nAMD. Design: Retrospective analysis. Participants: Baseline and imaging data from patients with subfoveal choroidal neovascularization secondary to age-related macular degeneration. Methods: Baseline data from 502 study eyes from the HARBOR (NCT00891735) prospective clinical trial (monthly ranibizumab 0.5 and 2.0 mg arms) were pooled; 432 baseline OCT volume scans were included in the analysis. Seven models, based on baseline quantitative OCT features (least absolute shrinkage and selection operator [Lasso] OCT minimum [min], Lasso OCT 1 standard error [SE]); on quantitative OCT features and clinical variables at baseline (Lasso min, Lasso 1SE, CatBoost, RF [random forest]); or on baseline OCT images only (deep learning [DL] model), were systematically compared with a benchmark linear model of baseline age and BCVA. Quantitative OCT features were derived by a DL segmentation model applied to the volume images, including retinal layer volumes and thicknesses, and retinal fluid biomarkers, including statistics on fluid volume and distribution. Main Outcome Measures: Prognostic ability of the models was evaluated using the coefficient of determination (R2) and median absolute error (MAE; letters). Results: In the first cross-validation split, mean R2 (MAE) of the Lasso min, Lasso 1SE, CatBoost, and RF models was 0.46 (7.87), 0.42 (8.43), 0.45 (7.75), and 0.43 (7.60), respectively. These models ranked higher than or similar to the benchmark model (mean R2, 0.41; mean MAE, 8.20 letters) and better than OCT-only models (mean R2: Lasso OCT min, 0.20; Lasso OCT 1SE, 0.16; DL, 0.34). The Lasso min model was selected for detailed analysis; mean R2 (MAE) of the Lasso min and benchmark models for 1000 repeated cross-validation splits were 0.46 (7.7) and 0.42 (8.0), respectively. Conclusions: Machine learning models based on AI-segmented OCT features and clinical variables at baseline may predict future response to ranibizumab treatment in patients with nAMD. However, further developments will be needed to realize the clinical utility of such AI-based tools. Financial Disclosures: Proprietary or commercial disclosure may be found after the references.
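For readers unfamiliar with the benchmark comparison described above, the following sketch contrasts a cross-validated Lasso model on combined OCT-derived and clinical features against a simple age-plus-baseline-BCVA linear model, scored with R2 and median absolute error. All feature names and data are synthetic placeholders, not the HARBOR variables or the authors' models.

```python
# Illustrative comparison of a Lasso model on tabular baseline features versus
# an age + baseline-BCVA benchmark, scored with R^2 and median absolute error.
# Features and outcomes are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, median_absolute_error

rng = np.random.default_rng(42)
n = 432
X_oct = rng.normal(size=(n, 20))             # stand-in for segmented OCT features
age = rng.uniform(55, 95, size=(n, 1))
baseline_bcva = rng.uniform(20, 80, size=(n, 1))
y = (0.6 * baseline_bcva[:, 0] - 0.2 * age[:, 0]
     + 3.0 * X_oct[:, 0] + rng.normal(scale=8, size=n))  # month-9 BCVA (letters)

X_full = np.hstack([X_oct, age, baseline_bcva])
X_bench = np.hstack([age, baseline_bcva])

Xf_tr, Xf_te, Xb_tr, Xb_te, y_tr, y_te = train_test_split(
    X_full, X_bench, y, test_size=0.2, random_state=0)

lasso = LassoCV(cv=5).fit(Xf_tr, y_tr)          # OCT features + clinical variables
bench = LinearRegression().fit(Xb_tr, y_tr)     # benchmark: age + baseline BCVA

for name, model, X_te in [("lasso", lasso, Xf_te), ("benchmark", bench, Xb_te)]:
    pred = model.predict(X_te)
    print(name, round(r2_score(y_te, pred), 2),
          round(median_absolute_error(y_te, pred), 2))
```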

3.
Ophthalmol Retina ; 7(3): 243-252, 2023 03.
Article in English | MEDLINE | ID: mdl-36038116

ABSTRACT

OBJECTIVE: To develop deep learning models for annualized geographic atrophy (GA) growth rate prediction using fundus autofluorescence (FAF) images and spectral-domain OCT volumes from baseline visits, which can be used for prognostic covariate adjustment to increase power of clinical trials. DESIGN: This retrospective analysis estimated GA growth rate as the slope of a linear fit on all available measurements of lesion area over a 2-year period. Three multitask deep learning models-FAF-only, OCT-only, and multimodal (FAF and OCT)-were developed to predict concurrent GA area and annualized growth rate. PARTICIPANTS: Patients were from prospective and observational lampalizumab clinical trials. METHODS: The 3 models were trained on the development data set, tested on the holdout set, and further evaluated on the independent test sets. Baseline FAF images and OCT volumes from study eyes of patients with bilateral GA (NCT02247479; NCT02247531; and NCT02479386) were split into development (1279 patients/eyes) and holdout (443 patients/eyes) sets. Baseline FAF images from study eyes of NCT01229215 (106 patients/eyes) and NCT02399072 (169 patients/eyes) were used as independent test sets. MAIN OUTCOME MEASURES: Model performance was evaluated using squared Pearson correlation coefficient (r2) between observed and predicted lesion areas/growth rates. Confidence intervals were calculated by bootstrap resampling (B = 10 000). RESULTS: On the holdout data set, r2 (95% confidence interval) of the FAF-only, OCT-only, and multimodal models for GA lesion area prediction was 0.96 (0.95-0.97), 0.91 (0.87-0.95), and 0.94 (0.92-0.96), respectively, and for GA growth rate prediction was 0.48 (0.41-0.55), 0.36 (0.29-0.43), and 0.47 (0.40-0.54), respectively. On the 2 independent test sets, r2 of the FAF-only model for GA lesion area was 0.98 (0.97-0.99) and 0.95 (0.93-0.96), and for GA growth rate was 0.65 (0.52-0.75) and 0.47 (0.34-0.60). CONCLUSIONS: We show the feasibility of using baseline FAF images and OCT volumes to predict individual GA area and growth rates using a multitask deep learning approach. The deep learning-based growth rate predictions could be used for covariate adjustment to increase power of clinical trials. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found after the references.
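A minimal sketch of the reported evaluation follows: squared Pearson correlation (r2) between observed and predicted GA growth rates, with a percentile confidence interval from B = 10 000 bootstrap resamples. The growth-rate values here are synthetic stand-ins rather than study data.

```python
# Bootstrap confidence interval for r^2 between observed and predicted growth
# rates, mirroring the evaluation described above. Data are synthetic.
import numpy as np
from scipy.stats import pearsonr

def r2_pearson(obs: np.ndarray, pred: np.ndarray) -> float:
    """Squared Pearson correlation between observed and predicted values."""
    return pearsonr(obs, pred)[0] ** 2

rng = np.random.default_rng(1)
observed = rng.normal(1.8, 0.9, size=443)              # placeholder growth rates
predicted = observed + rng.normal(0, 0.8, size=443)    # placeholder model output

point_estimate = r2_pearson(observed, predicted)

B = 10_000
boot = np.empty(B)
idx = np.arange(len(observed))
for b in range(B):
    sample = rng.choice(idx, size=len(idx), replace=True)
    boot[b] = r2_pearson(observed[sample], predicted[sample])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"r^2 = {point_estimate:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```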


Subjects
Deep Learning, Geographic Atrophy, Humans, Prospective Studies, Retrospective Studies, Optical Coherence Tomography/methods, Fluorescein Angiography/methods, Multimodal Imaging
4.
Transl Vis Sci Technol ; 9(2): 51, 2020 09.
Article in English | MEDLINE | ID: mdl-32974088

ABSTRACT

Purpose: To develop deep learning (DL) models to predict best-corrected visual acuity (BCVA) from optical coherence tomography (OCT) images from patients with neovascular age-related macular degeneration (nAMD). Methods: Retrospective analysis of OCT images and associated BCVA measurements from the phase 3 HARBOR trial (NCT00891735). DL regression models were developed to predict BCVA at the concurrent visit and 12 months from baseline using OCT images. Binary classification models were developed to predict BCVA of Snellen equivalent of <20/40, <20/60, and ≤20/200 at the concurrent visit and 12 months from baseline. Results: The regression model to predict BCVA at the concurrent visit had R2 = 0.67 (root-mean-square error [RMSE] = 8.60) in study eyes and R2 = 0.84 (RMSE = 9.01) in fellow eyes. The best classification model to predict BCVA at the concurrent visit had an area under the receiver operating characteristic curve (AUC) of 0.92 in study eyes and 0.98 in fellow eyes. The regression model to predict BCVA at month 12 using baseline OCT had R2 = 0.33 (RMSE = 14.16) in study eyes and R2 = 0.75 (RMSE = 11.27) in fellow eyes. The best classification model to predict BCVA at month 12 had AUC = 0.84 in study eyes and AUC = 0.96 in fellow eyes. Conclusions: DL shows promise in predicting BCVA from OCTs in nAMD. Further research should elucidate the utility of models in clinical settings. Translational Relevance: DL models predicting BCVA could be used to enhance understanding of structure-function relationships and develop more efficient clinical trials.
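The sketch below illustrates, on synthetic numbers, how the two kinds of outcomes above relate: continuous BCVA predictions scored with R2 and RMSE, and a thresholded version scored with ROC AUC. The letter-score cut-off used for "worse than 20/40" (roughly 69 ETDRS letters) and the data are assumptions for illustration only.

```python
# Regression metrics (R^2, RMSE) and a thresholded classification metric (AUC)
# for BCVA predictions. Values are synthetic; the 20/40 cut-off is assumed.
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, roc_auc_score

rng = np.random.default_rng(7)
true_bcva = rng.normal(60, 15, size=300).clip(0, 100)   # ETDRS letters
pred_bcva = true_bcva + rng.normal(0, 9, size=300)      # placeholder model output

r2 = r2_score(true_bcva, pred_bcva)
rmse = float(np.sqrt(mean_squared_error(true_bcva, pred_bcva)))

# Binary task: BCVA worse than ~20/40 (assumed cut at 69 letters).
labels = (true_bcva < 69).astype(int)
scores = -pred_bcva   # lower predicted BCVA -> higher probability of <20/40
auc = roc_auc_score(labels, scores)
print(f"R^2 = {r2:.2f}, RMSE = {rmse:.1f} letters, AUC = {auc:.2f}")
```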


Subjects
Deep Learning, Optical Coherence Tomography, Humans, Intravitreal Injections, Retrospective Studies, Visual Acuity
5.
J Digit Imaging ; 32(2): 228-233, 2019 04.
Article in English | MEDLINE | ID: mdl-30465142

ABSTRACT

Applying state-of-the-art machine learning techniques to medical images requires a thorough selection and normalization of input data. One such step in digital mammography screening for breast cancer is the labeling and removal of special diagnostic views, in which diagnostic tools or magnification are applied to assist in the assessment of suspicious initial findings. Because a common task in medical informatics is the prediction of disease and its stage, these special diagnostic views, which are enriched only among the cohort of diseased cases, will bias machine learning disease predictions. To automate this process, we develop a machine learning pipeline that uses both DICOM headers and images to detect such views automatically, allowing for their removal and the generation of unbiased datasets. We achieve an AUC of 99.72% in predicting special mammogram views when combining both types of models. Finally, we apply these models to clean up a dataset of about 772,000 images with an expected sensitivity of 99.0%. The pipeline presented in this paper can be applied to other datasets to obtain high-quality image sets suitable for training disease detection algorithms.
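As a rough illustration of the kind of pipeline described (not the authors' implementation), the sketch below fuses a classifier on DICOM-header features with a classifier on image-derived features and scores the combination with ROC AUC; the header fields, image embeddings, and score-averaging rule are all assumptions.

```python
# Late fusion of a header-based and an image-based classifier for flagging
# special diagnostic views. All features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
header_feats = rng.normal(size=(n, 8))    # e.g. encoded view/magnification tags
image_feats = rng.normal(size=(n, 32))    # e.g. embeddings from an image model
is_special_view = (header_feats[:, 0] + image_feats[:, 0]
                   + rng.normal(size=n)) > 1

Xh_tr, Xh_te, Xi_tr, Xi_te, y_tr, y_te = train_test_split(
    header_feats, image_feats, is_special_view, test_size=0.25, random_state=0)

header_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xh_tr, y_tr)
image_model = LogisticRegression(max_iter=1000).fit(Xi_tr, y_tr)

# Simple late fusion: average the two predicted probabilities.
fused = 0.5 * (header_model.predict_proba(Xh_te)[:, 1]
               + image_model.predict_proba(Xi_te)[:, 1])
print("fused AUC:", round(roc_auc_score(y_te, fused), 4))
```

Flagged images can then be dropped from the training pool so that downstream disease classifiers are not biased by views that occur only in diseased cases.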


Subjects
Breast Neoplasms/diagnostic imaging, Machine Learning, Mammography/classification, Mammography/methods, Automation, Datasets as Topic, Female, Humans, Radiology Information Systems, Sensitivity and Specificity
6.
Radiology ; 290(2): 456-464, 2019 02.
Article in English | MEDLINE | ID: mdl-30398430

ABSTRACT

Purpose To develop and validate a deep learning algorithm that predicts the final diagnosis of Alzheimer disease (AD), mild cognitive impairment, or neither at fluorine 18 (18F) fluorodeoxyglucose (FDG) PET of the brain and compare its performance to that of radiologic readers. Materials and Methods Prospective 18F-FDG PET brain images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) (2109 imaging studies from 2005 to 2017, 1002 patients) and a retrospective independent test set (40 imaging studies from 2006 to 2016, 40 patients) were collected. The final clinical diagnosis at follow-up was recorded. A convolutional neural network with the InceptionV3 architecture was trained on 90% of the ADNI data set and tested on the remaining 10%, as well as on the independent test set, with performance compared to that of radiologic readers. The model was analyzed with sensitivity, specificity, receiver operating characteristic (ROC) analysis, saliency maps, and t-distributed stochastic neighbor embedding. Results The algorithm achieved an area under the ROC curve of 0.98 (95% confidence interval: 0.94, 1.00) when evaluated on predicting the final clinical diagnosis of AD in the independent test set (82% specificity at 100% sensitivity), an average of 75.8 months prior to the final diagnosis, which in ROC space outperformed reader performance (57% [four of seven] sensitivity, 91% [30 of 33] specificity; P < .05). Saliency maps demonstrated attention to known areas of interest but with focus on the entire brain. Conclusion By using fluorine 18 fluorodeoxyglucose PET of the brain, a deep learning algorithm developed for early prediction of Alzheimer disease achieved 82% specificity at 100% sensitivity, an average of 75.8 months prior to the final diagnosis. © RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Larvie in this issue.
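The headline figure above, 82% specificity at 100% sensitivity, is read directly off the ROC curve. The sketch below shows that read-out on a synthetic score set with the same class balance as the independent test set (7 AD, 33 non-AD); the scores themselves are placeholders, not model outputs.

```python
# Reading specificity at 100% sensitivity from an ROC curve.
# Labels mimic the 7 AD / 33 non-AD split; scores are synthetic.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)
labels = np.concatenate([np.ones(7), np.zeros(33)])
scores = np.concatenate([rng.uniform(0.6, 1.0, 7),    # placeholder AD scores
                         rng.uniform(0.0, 0.8, 33)])  # placeholder non-AD scores

fpr, tpr, thresholds = roc_curve(labels, scores)
auc = roc_auc_score(labels, scores)

# Specificity (1 - FPR) at the first operating point where sensitivity is 1.0.
first_full_sens = np.where(tpr >= 1.0)[0][0]
specificity = 1 - fpr[first_full_sens]
print(f"AUC = {auc:.2f}, specificity at 100% sensitivity = {specificity:.2f}")
```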


Subjects
Alzheimer Disease/diagnostic imaging, Deep Learning, Computer-Assisted Image Interpretation/methods, Positron-Emission Tomography/methods, Aged, Aged 80 and over, Algorithms, Cognitive Dysfunction/diagnostic imaging, Female, Fluorodeoxyglucose F18/therapeutic use, Humans, Male, Middle Aged, Retrospective Studies, Sensitivity and Specificity